The attached workshop deck is an AI coding primer, not a validation workflow. Its first move is to give developers a shared operating model: when to use Copilot vs Claude Code, how model choice affects cost and output quality, and why prompt scope plus context hygiene determine whether an agent helps or thrashes.
The deck positions them as complementary, not interchangeable. Copilot starts inside the IDE and shines on in-flow coding. Claude Code starts with the whole repository in scope and is built for terminal-first, multi-file execution.
VS Code and JetBrains workflows map cleanly to Copilot’s Ask, Edit, Agent, and Plan modes. It is the natural fit for inline edits, scoped file changes, and backlog items that can be assigned and returned as PRs.
The CLI workflow assumes the agent can inspect the codebase, run commands, execute tests, and work across the repository from the start. That makes it stronger for coordinated edits, refactors, and tooling-heavy tasks.
Ask mode is for explanation. Edit mode is for controlled diffs. Agent mode is for autonomous search, edits, and terminal loops. Plan mode exists to make multi-file work reviewable before a single line changes.
If the task is local, stay local. If the task needs shell commands, repo search, tests, or orchestration, step up to a true coding agent. The deck frames tool selection as matching workflow shape, not brand preference.
| | GitHub Copilot | Claude Code |
|---|---|---|
| Lives in | VS Code, JetBrains, Eclipse, Xcode | Terminal (CLI) + IDE extensions |
| Interaction | Ask, Edit, Agent modes + Chat UI | Conversational prompts + planning |
| Scope | File → multi-file → full repo (agent) | Multi-file, full codebase from start |
| Models | GPT-5.4, Claude Sonnet 4.6, Gemini | Sonnet 4.6 (default) / Opus 4.6 |
| Runs code? | Yes — agent mode runs terminal | Yes — shell, tests, git, MCP tools |
| Plans ahead? | Yes — Plan mode (review before code) | Yes — explicit planning mode |
| Context | Model-dependent (128K–1M) | 1M tokens (Opus 4.6 / Sonnet 4.6) |
| Pricing | $10–$39/mo subscription | Usage-based (API tokens) |
The workshop gives a clear recommendation: use Sonnet 4.6 as the daily driver, reserve Opus 4.6 for deeper reasoning and coordinated agent work, and treat GPT-5.4 as a strong Copilot-side option when you want premium request quality inside the IDE.
Sonnet 4.6 pairs a 1M-token context with strong coding performance at a much lower price point than Opus. The deck explicitly recommends it for most day-to-day development sessions.
Opus 4.6 has the same 1M-token context but is priced for the hard cases: deeper reasoning, complex refactors, and multi-agent coordination where mistakes are expensive.
The deck calls out GPT-5.4 as a strong premium model in Copilot for computer-use style tasks, tool search, and higher-end assisted coding inside the editor flow.
The workshop treats long context windows as a practical advantage only when the team manages that context deliberately.
Even a 1M-token window is large enough for serious work, but it is not a license to keep every file, tab, and transcript open forever.
The deck’s economic point is blunt: save Opus for the sessions where extra reasoning actually pays for itself.
| Model | Context | Max Output | Pricing (in/out) | Best For |
|---|---|---|---|---|
| Claude Opus 4.6 | 1M tokens | 128K output | $15 / $75 per MTok | Frontier reasoning, agent teams, complex refactors |
| Claude Sonnet 4.6 | 1M tokens | 64K output | $3 / $15 per MTok | Default in Claude Code. 98% of Opus coding perf. |
| Claude Haiku 4.5 | 200K tokens | 8K output | $0.80 / $4 per MTok | Fast subagents, exploration, lightweight tasks |
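Using the list prices above, a quick back-of-the-envelope sketch shows the gap the deck is pointing at. The session shape here (200K tokens in, 20K out) is an illustrative assumption, not a figure from the workshop:

```typescript
// Cost of one session = (input tokens / 1M) * input price + (output tokens / 1M) * output price.
// Prices are the per-MTok list prices from the table above; the session size is a made-up example.
type Price = { inPerMTok: number; outPerMTok: number };

const opus: Price = { inPerMTok: 15, outPerMTok: 75 };
const sonnet: Price = { inPerMTok: 3, outPerMTok: 15 };

function sessionCost(p: Price, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * p.inPerMTok + (outputTokens / 1_000_000) * p.outPerMTok;
}

// A hypothetical refactor session: 200K tokens in (code + history), 20K tokens out.
console.log(sessionCost(opus, 200_000, 20_000));   // $4.50
console.log(sessionCost(sonnet, 200_000, 20_000)); // $0.90 — five times cheaper per session
```

At that ratio, the deck's advice follows directly: the extra reasoning has to be worth roughly a 5x cost multiple before Opus is the right pick.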
| Model | Context | Pricing (in/out) | Notes |
|---|---|---|---|
| GPT-5.4 | 272K tokens | $10 / $30 per MTok | Default for premium requests. Computer use, tool search. |
| Claude Sonnet 4.6 | 1M tokens | $3 / $15 per MTok | Available in Copilot Pro+. Strong agentic coding. |
| Gemini 2.5 Pro | 1M tokens | Varies | Alternative for large-context tasks. |
CRISP is the workshop’s shared structure for prompting coding agents. It keeps requests concrete, bounded, and aligned to the codebase instead of drifting into vague “fix this” conversations.
Context: Name the stack, framework, constraints, and the relevant part of the codebase. “TypeScript/React 19 monorepo with Vitest” is usable context. “Fix the bug” is not.
Role: Tell the model how to think about the task: senior backend engineer, security reviewer, test engineer, release owner. The role shapes judgment and review criteria.
Intent: State the desired outcome explicitly: refactor to async/await, fix the 401 in middleware, generate unit tests, explain the architecture. Intent keeps the agent from inventing its own target.
Scope: Tell the agent what to touch and what not to touch. The deck emphasizes negative constraints because they prevent “helpful” surprise edits across unrelated files.
Pattern: Reference the style, conventions, or examples to follow. If the repo already has an error-handling pattern, naming scheme, or test style, point the model at it directly.
The workshop’s “bad vs good” example is making a narrow point: better prompting is mostly better scoping. A strong coding prompt describes the codebase, the goal, the boundaries, and the pattern to imitate before the agent writes anything.
[Context] TypeScript/Express API, Node 22, vitest. [Role] Act as a senior backend engineer. [Intent] Fix the 401 error in src/auth/middleware.ts. [Scope] Only middleware.ts. Don’t touch test files. [Pattern] Use the same try/catch style as utils/errors.ts.
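The same structure can be made mechanical. As a minimal sketch, assuming a hypothetical `CrispPrompt` interface and `renderCrisp` helper (neither is from the deck), the five elements become required fields rather than free-form text:

```typescript
// Hypothetical helper: forces every prompt to carry all five CRISP elements.
interface CrispPrompt {
  context: string; // stack, framework, relevant part of the codebase
  role: string;    // how the model should think about the task
  intent: string;  // the explicit desired outcome
  scope: string;   // what to touch and, crucially, what not to touch
  pattern: string; // existing conventions or examples to imitate
}

function renderCrisp(p: CrispPrompt): string {
  return [
    `[Context] ${p.context}`,
    `[Role] ${p.role}`,
    `[Intent] ${p.intent}`,
    `[Scope] ${p.scope}`,
    `[Pattern] ${p.pattern}`,
  ].join("\n");
}

// Reproduces the deck's example prompt from typed fields.
const prompt = renderCrisp({
  context: "TypeScript/Express API, Node 22, Vitest.",
  role: "Act as a senior backend engineer.",
  intent: "Fix the 401 error in src/auth/middleware.ts.",
  scope: "Only middleware.ts. Don't touch test files.",
  pattern: "Use the same try/catch style as utils/errors.ts.",
});
```

Building the prompt from typed fields makes it hard to forget the Scope element, which is the one the deck credits with preventing surprise edits.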
If the task prompt keeps repeating repository rules, move those rules into CLAUDE.md or a skill file. Keep the live prompt focused on the current job, not boilerplate the model has already been taught.
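As an illustration, a repo-level CLAUDE.md can carry those standing rules so each prompt doesn't repeat them. The specific rules below are hypothetical, not from the workshop:

```markdown
# CLAUDE.md — standing rules for this repo (illustrative example)

- Stack: TypeScript/Express API on Node 22; tests run with Vitest.
- Follow the try/catch error-handling style in utils/errors.ts.
- Never modify test files unless the task explicitly asks for it.
- Run the test suite before declaring any change complete.
```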
The workshop is explicit about what fills the window: system prompts, instruction files, opened files, chat history, tool results, and the new request itself. Bigger windows help, but noisy sessions still degrade output.
Large files, unrelated tabs, and stale examples cost tokens and attention. The deck treats focused file selection as a quality control mechanism, not just an optimization.
Claude Code’s compaction feature exists because every prior turn otherwise keeps shaping the next answer. New task, new summary, or new session.
The model needs the key error lines and stack trace, not a two-thousand-line terminal transcript. Distillation preserves budget and improves signal.
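A minimal sketch of that distillation step, assuming a hypothetical pre-filter you run over the transcript before pasting it into the session:

```typescript
// Hypothetical pre-filter: keep only the lines a model actually needs from a long
// terminal transcript — error messages and stack frames — instead of the whole log.
function distillTranscript(transcript: string, maxLines = 40): string {
  // Matches common error keywords and stack-frame lines like "at fn (file.ts:10:5)".
  const signal = /error|exception|fail|at\s+\S+\s+\(.*:\d+:\d+\)/i;
  return transcript
    .split("\n")
    .filter((line) => signal.test(line))
    .slice(0, maxLines)
    .join("\n");
}
```

Forty lines of signal in place of a two-thousand-line dump is exactly the trade the deck is describing: less budget spent, more attention on the failure itself.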
Phase 03 — Design + Build. Development tooling and agent workflows live in the implementation phase of the SDLC.